Supplemental Material for "Face2Face: Real-time Face Capture and Reenactment of RGB Videos"

Authors

  • Justus Thies
  • Michael Zollhöfer
  • Marc Stamminger
  • Christian Theobalt
  • Matthias Nießner
Abstract

In this document, we provide supplementary information on the method of Thies et al. [4]. More specifically, we include additional detail about our optimization framework (see Sections 1 and 2), and we show further comparisons against other methods (see Section 3). We also evaluate the reconstruction error in a self-reenactment scenario. Section 4 lists the mathematical symbols used, and Table 1 lists the video sources used.
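As a reading aid, the following is a minimal sketch of how such a self-reenactment reconstruction error could be measured: the reenacted output frame is compared against the ground-truth input frame that drove it, using a photometric RMSE restricted to a face-region mask. The function name, the mask, and the choice of metric are illustrative assumptions, not the authors' evaluation code.

    import numpy as np

    def photometric_rmse(reenacted, ground_truth, face_mask):
        """Per-frame RMSE over RGB values inside a boolean face-region mask."""
        # Hypothetical metric; frame loading, the mask, and the reenactment
        # pipeline itself are assumed to exist elsewhere.
        diff = reenacted.astype(np.float64) - ground_truth.astype(np.float64)
        masked = diff[face_mask]                  # shape (N, 3): face pixels only
        return float(np.sqrt(np.mean(masked ** 2)))

    # Usage (hypothetical): average the per-frame error over a sequence.
    # errors = [photometric_rmse(r, g, m) for r, g, m in frame_triples]
    # print("mean photometric RMSE:", np.mean(errors))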


Similar Articles

IMU2Face: Real-time Gesture-driven Facial Reenactment

We present IMU2Face, a gesture-driven facial reenactment system. To this end, we combine recent advances in facial motion capture and inertial measurement units (IMUs) to control the facial expressions of a person in a target video based on intuitive hand gestures. IMUs are omnipresent, since modern smart-phones, smart-watches and drones integrate such sensors; e.g., for changing the orientatio...
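To make the gesture-to-expression idea concrete, here is a deliberately simplified sketch, under the assumption of a generic blendshape rig, that maps the roll and pitch of a hand-held IMU to two expression weights. The coefficient names and the linear mapping are illustrative only and do not reproduce IMU2Face's actual capture or retargeting pipeline.

    import math

    def imu_to_expression(roll_rad, pitch_rad):
        """Map IMU roll/pitch (radians) to [0, 1] blendshape weights (illustrative)."""
        def clamp01(x):
            return max(0.0, min(1.0, x))
        return {
            "mouth_open": clamp01(pitch_rad / (math.pi / 4)),  # tilt forward -> open mouth
            "smile": clamp01(roll_rad / (math.pi / 4)),        # roll right -> smile
        }

    # Example: a 30-degree forward tilt with a slight roll.
    print(imu_to_expression(math.radians(10), math.radians(30)))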


FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality

We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR in...


BMVA technical meeting: Dynamic Scene Reconstruction

Michael Zollhöfer started proceedings, presenting the achievements of his group at the Max Planck Institute for Informatics in real-time reconstruction, making the case for the importance of real-time performance for VR/AR applications. This included the parameterised face model behind the famous face2face demo featuring George W. Bush and others, enabling real-time reconstruction and re-target...


Real-Time Facial Segmentation and Performance Capture from RGB Input

We introduce the concept of unconstrained real-time 3D facial performance capture through explicit semantic segmentation in the RGB input. To ensure robustness, cutting-edge supervised learning approaches rely on large training datasets of face images captured in the wild. While impressive tracking quality has been demonstrated for faces that are largely visible, any occlusion due to hair, acce...
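As a rough illustration of why explicit segmentation helps, the sketch below gates a photometric data term with a per-pixel face mask so that occluded pixels (hair, hands, accessories) contribute nothing to the model fit. The function and array names are placeholders; this is not the paper's implementation.

    import numpy as np

    def masked_photometric_energy(rendered, observed, face_mask):
        """Sum of squared color residuals over pixels labeled as 'face' only."""
        residual = rendered.astype(np.float64) - observed.astype(np.float64)
        residual[~face_mask] = 0.0        # occluded pixels are ignored entirely
        return float(np.sum(residual ** 2))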


Real-time 3D Face Reconstruction and Gaze Tracking for Virtual Reality

With the rapid development of virtual reality (VR) technology, VR glasses, a.k.a. Head-Mounted Displays (HMDs), are widely available, allowing immersive 3D content to be viewed. A natural need for truly immersive VR is to allow bidirectional communication: the user should be able to interact with the virtual world using facial expressions and eye gaze, in addition to traditional means of interac...




Publication date: 2016